
    Constraints in binary semantical networks

    Information Systems; Management Information Systems; Networks

    Least Generalizations and Greatest Specializations of Sets of Clauses

    The main operations in Inductive Logic Programming (ILP) are generalization and specialization, which only make sense in a generality order. In ILP, the three most important generality orders are subsumption, implication and implication relative to background knowledge. The two languages used most often are languages of clauses and languages of only Horn clauses. This gives a total of six different ordered languages. In this paper, we give a systematic treatment of the existence or non-existence of least generalizations and greatest specializations of finite sets of clauses in each of these six ordered sets. We survey results already obtained by others and also contribute some answers of our own. Our main new results are, firstly, the existence of a computable least generalization under implication of every finite set of clauses containing at least one non-tautologous function-free clause (among other, not necessarily function-free, clauses). Secondly, we show that such a least generalization need not exist under relative implication, not even if both the set that is to be generalized and the background knowledge are function-free. Thirdly, we give a complete discussion of existence and non-existence of greatest specializations in each of the six ordered languages. Comment: see http://www.jair.org/ for any accompanying file.
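
    As a concrete illustration of the subsumption order, the following sketch computes Plotkin's least general generalization (anti-unification) of two atoms. This is the standard textbook construction, not the paper's new result for the implication order; the term representation and all names are ours.

```python
# Sketch of Plotkin's least general generalization (anti-unification) of two
# atoms under theta-subsumption. Compound terms are tuples ('f', arg1, ...),
# constants and variables are plain strings; this encoding is illustrative.

def lgg_terms(s, t, table, counter):
    """Anti-unify two terms; each distinct unequal pair maps to one fresh variable."""
    if s == t:
        return s
    if (isinstance(s, tuple) and isinstance(t, tuple)
            and s[0] == t[0] and len(s) == len(t)):
        return (s[0],) + tuple(lgg_terms(a, b, table, counter)
                               for a, b in zip(s[1:], t[1:]))
    if (s, t) not in table:          # the same pair always gets the same variable
        counter[0] += 1
        table[(s, t)] = f"V{counter[0]}"
    return table[(s, t)]

def lgg_atoms(a1, a2):
    """LGG of two atoms with the same predicate symbol and arity, else None."""
    if a1[0] != a2[0] or len(a1) != len(a2):
        return None
    table, counter = {}, [0]
    return (a1[0],) + tuple(lgg_terms(s, t, table, counter)
                            for s, t in zip(a1[1:], a2[1:]))

print(lgg_atoms(('p', 'a', ('f', 'a')), ('p', 'b', ('f', 'b'))))
# -> ('p', 'V1', ('f', 'V1'))
```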

    Term partitions and minimal generalizations of clauses

    Term occurrences of any clause C are determined by their positions. The set of all term partitions defined on subsets of the term occurrences of C forms a partially ordered set. This poset is isomorphic to the set of all generalizations of C, and its structure can be inferred from the term occurrences in C alone. These constructions in the poset can be applied in machine learning.
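
    As a small sketch of the underlying bookkeeping (our own representation, not the paper's formalism: a clause is a list of literal tuples), term occurrences can be enumerated by their positions; a partition of these occurrences then determines a generalization by mapping each block to one variable.

```python
# Enumerate the term occurrences of a clause by their positions. A clause is a
# list of literals, a literal is a tuple (predicate, arg1, ...), and compound
# terms are tuples as well; a position is a path of argument indices.

def occurrences(clause):
    def walk(term, pos):
        yield pos, term
        if isinstance(term, tuple):          # compound term: recurse into arguments
            for i, arg in enumerate(term[1:], start=1):
                yield from walk(arg, pos + (i,))
    for li, lit in enumerate(clause):
        for i, arg in enumerate(lit[1:], start=1):
            yield from walk(arg, (li, i))

# clause p(X, f(X)) :- q(X), encoded as [('p', 'X', ('f', 'X')), ('q', 'X')]
print(list(occurrences([('p', 'X', ('f', 'X')), ('q', 'X')])))
# -> [((0, 1), 'X'), ((0, 2), ('f', 'X')), ((0, 2, 1), 'X'), ((1, 1), 'X')]
```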

    Flattening, generalizations of clauses and absorption algorithms

    In predicate logic, flattening can be used to replace terms built from function symbols by variables. It can also be used to express absorption in inverse resolution, as has been done by Rouveirol and Puget. In this article, three kinds of absorption algorithms are compared.
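
    The following is a rough sketch of the flattening idea, not Rouveirol and Puget's algorithm: every compound term in an atom is replaced by a fresh variable, and an auxiliary "function literal" records the connection. The representation and the `_p` naming convention are ours.

```python
# Flatten one atom: each compound term f(t1, ..., tn) is replaced by a fresh
# variable V, and an extra literal f_p(t1, ..., tn, V) is collected.

def flatten_atom(atom):
    extra, counter = [], [0]

    def flatten_term(term):
        if not isinstance(term, tuple):      # variable or constant: keep as-is
            return term
        args = tuple(flatten_term(a) for a in term[1:])
        counter[0] += 1
        var = f"V{counter[0]}"
        extra.append((term[0] + "_p",) + args + (var,))   # e.g. cons_p(X, Y, V1)
        return var

    flat = (atom[0],) + tuple(flatten_term(a) for a in atom[1:])
    return flat, extra

print(flatten_atom(('member', 'X', ('cons', 'X', 'Y'))))
# -> (('member', 'X', 'V1'), [('cons_p', 'X', 'Y', 'V1')])
```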

    The V- and W-operators in inverse resolutions

    This article gives algorithms for the V- and W-operators in inverse resolution. It also discusses the completeness of these algorithms.
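
    To show the shape of the step, here is a toy propositional version of absorption, the basic V-operator; the article's algorithms work on first-order clauses, where substitutions make the construction considerably more involved.

```python
# Propositional absorption (V-operator): given the resolvent p :- A, B and a
# parent clause q :- B, reconstruct the other parent p :- A, q.

def absorb(resolvent, parent):
    p_head, p_body = resolvent
    q_head, q_body = parent
    if not q_body <= p_body:        # the parent body must occur in the resolvent body
        return None
    return (p_head, (p_body - q_body) | {q_head})

print(absorb(('p', {'a', 'b', 'c'}), ('q', {'b', 'c'})))
# -> ('p', {'a', 'q'})
```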

    Generalizing Refinement Operators to Learn Prenex Conjunctive Normal Forms

    Inductive Logic Programming considers almost exclusively universally quantified theories. To add expressiveness, prenex conjunctive normal forms (PCNF) with existential variables should also be considered. ILP mostly uses learning with refinement operators. To extend refinement operators to PCNF, we should first do so with substitutions. However, applying a classic substitution to a PCNF with existential variables, one often obtains a generalization rather than a specialization. In this article we define substitutions that specialize a given PCNF, as well as a weakly complete downward refinement operator. Moreover, we analyze the complexities of this operator in different types of languages and search spaces. In this way we lay a foundation for learning systems on PCNF. Based on this operator, we have implemented a simple learning system, PCL, on some type of PCNF.
    learning; PCNF; completeness; refinement; substitutions

    Complexity dimensions and learnability

    A stochastic model of learning from examples has been introduced by Valiant [1984]. This PAC-learning model (PAC = probably approximately correct) reflects differences in complexity of concept classes, i.e. very complex classes are not efficiently PAC-learnable. Blumer et al. [1989] found that efficient PAC-learnability depends on the size of the Vapnik-Chervonenkis dimension [Vapnik & Chervonenkis, 1971] of a class. We will first discuss this dimension and give an algorithm to compute it, in order to provide the reader with the intuitive idea behind it. Natarajan [1987] defines a new, equivalent dimension for well-ordered classes. These well-ordered classes happen to satisfy a general condition that is sufficient for the construction of a number of equivalent dimensions. We will give this condition, as well as a generalized notion of an equivalent dimension. Also, a relatively efficient algorithm for the calculation of one such dimension for well-ordered classes is given.
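
    For intuition, a brute-force computation of the VC dimension of a finite concept class over a finite domain can look as follows; this is simply the definition turned into code, not the algorithm discussed in the paper.

```python
from itertools import combinations

def shatters(concepts, points):
    """True if every subset of `points` is cut out by some concept."""
    patterns = {tuple(p in c for p in points) for c in concepts}
    return len(patterns) == 2 ** len(points)

def vc_dimension(domain, concepts):
    """Size of the largest subset of the domain shattered by the class."""
    dim = 0
    for k in range(1, len(domain) + 1):
        if any(shatters(concepts, pts) for pts in combinations(domain, k)):
            dim = k
        else:
            break        # if no set of size k is shattered, no larger set is either
    return dim

# Intervals [a, b] over {0, 1, 2, 3} shatter pairs but no triple: VC dimension 2.
domain = [0, 1, 2, 3]
intervals = [frozenset(range(a, b + 1)) for a in domain for b in domain if a <= b]
intervals.append(frozenset())
print(vc_dimension(domain, intervals))   # -> 2
```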

    Towards a proof of the Kahn principle for linear dynamic networks

    We consider dynamic Kahn-like data flow networks, i.e. networks consisting of deterministic processes, each of which is able to expand into a subnetwork. The Kahn principle states that such networks are deterministic, i.e. that for each network, every execution provided with the same input delivers the same output. Moreover, the principle states that the output streams of such networks can be obtained as the smallest fixed point of a suitable operator derived from the network specification. This paper is meant as a first step towards a proof of this principle. For a specific subclass of dynamic networks, linear arrays of processes, we define a transition system yielding an operational semantics, which defines the meaning of a net as the set of all possible interleaved executions. We then prove that, although there is much nondeterminism at the execution level, this nondeterminism disappears when the system is viewed as a transformation from an input stream to an output stream. This result is obtained from the graph of all computations: such a graph can be constructed for any configuration, and all computation sequences that start from this configuration and are generated by the operational semantics are embedded in it.
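
    A minimal sketch of the stream-function view of such a network, using Python generators for a fixed linear array (the paper's dynamic expansion of processes into subnetworks and its transition system are not modelled): each process deterministically maps its input stream to an output stream, so their composition is deterministic as well.

```python
# Two deterministic stream transformers composed into a linear Kahn-style array.

def scale(stream, k):
    for x in stream:
        yield k * x

def running_sum(stream):
    total = 0
    for x in stream:
        total += x
        yield total

def network(source):
    # linear array: source -> scale -> running_sum
    return running_sum(scale(source, 2))

print(list(network(iter([1, 2, 3]))))   # -> [2, 6, 12]
```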

    The specialization problem and the completeness of unfolding

    We discuss the problem of specializing a definite program with respect to sets of positive and negative examples, following Bostrom and Idestam-Almquist. This problem is very relevant in the field of inductive learning. First we show that there exist sets of examples that have no correct program, i.e., no program which implies all positive and no negative examples. Hence it only makes sense to talk about specialization problems for which a solution (a correct program) exists. To solve such problems, we first introduce UD1-specialization, based upon the transformation rule unfolding. We show that UD1-specialization is incomplete - some solvable specialization problems do not have a UD1-specialization as a solution - and generalize it to the stronger UD2-specialization. UD2-specialization also turns out to be incomplete. An analysis of program specialization, using the subsumption theorem for SLD-resolution, shows the reason for this incompleteness. Based on that analysis, we then define UDS-specialization (a generalization of UD2-specialization), and prove that every specialization problem has a UDS-specialization as a solution. We also discuss the relationship between this specialization technique and the generalization technique based on inverse resolution. Finally, we go into several more implementational matters, which outline an interesting topic for future research.
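
    A toy propositional sketch of the unfolding rule referred to here, only to show its shape; the paper works with definite programs and SLD-resolution, where the selected atom is also unified with the heads of the unfolding clauses.

```python
# Unfold one body atom of a clause against a propositional definite program.
# A clause is (head, body); a program is a list of such clauses.

def unfold(clause, program, atom):
    """Replace `atom` in the body of `clause` by the bodies of its definitions."""
    head, body = clause
    if atom not in body:
        return [clause]
    rest = [b for b in body if b != atom]
    return [(head, rest + list(d_body))
            for d_head, d_body in program if d_head == atom]

# program:  q :- r, s.   q :- t.
print(unfold(('p', ['q', 'u']), [('q', ['r', 's']), ('q', ['t'])], 'q'))
# -> [('p', ['u', 'r', 's']), ('p', ['u', 't'])]
```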

    Constructing refinement operators by decomposing logical implication

    Inductive learning models [Plotkin 1971; Shapiro 1981] often use a search space of clauses, ordered by a generalization hierarchy. To find solutions in the model, search algorithms use different generalization and specialization operators. In this article we will decompose the quasi-ordering induced by logical implication into six increasingly weak orderings. The difference between two successive orderings will be small, and can therefore be understood easily. Using this decomposition, we will describe upward and downward refinement operators for all orderings, including theta-subsumption and logical implication.
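
    As an illustration of the general idea (not the operators defined in the article, which span the whole chain of orderings up to implication), here is a minimal downward refinement sketch under theta-subsumption for function-free clauses: unify two variables of the clause, or add a most general literal for a predicate in the signature. The representation and names are ours; variables are strings starting with an upper-case letter.

```python
from itertools import combinations

# A clause is a frozenset of literal tuples (predicate, arg1, ...), function-free.

def variables(clause):
    return sorted({a for lit in clause for a in lit[1:] if a[0].isupper()})

def apply_sub(clause, old, new):
    return frozenset(tuple(new if a == old else a for a in lit) for lit in clause)

def refine(clause, signature, fresh="Z"):
    # 1. unify a pair of distinct variables of the clause
    for x, y in combinations(variables(clause), 2):
        yield apply_sub(clause, y, x)
    # 2. add a most general literal p(Z1, ..., Zn) with fresh variables
    for pred, arity in signature:
        new_lit = (pred,) + tuple(f"{fresh}{i}" for i in range(1, arity + 1))
        yield clause | {new_lit}

c = frozenset({('p', 'X', 'Y')})
print(list(refine(c, [('q', 1)])))
# -> [frozenset({('p', 'X', 'X')}), frozenset({('p', 'X', 'Y'), ('q', 'Z1')})]
```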